All locking begins by defining atomic operations, or critical sections: these are the basic pieces of a program that must run to completion, without disturbance from third parties. In other words, atoms are all-or-nothing pieces of a program. Atoms are bracketed by GetLock() and ReleaseCurrentLock() calls within the program code:
GetLock(parameters)
/* Atom code */
ReleaseCurrentLock()
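As an illustration of this bracketing, here is a minimal sketch in Python, in which the exclusive lock is a threading.Lock and the function names simply mirror the pseudocode above (the names and the lock implementation are assumptions for illustration, not part of any real API):

```python
import threading

_lock = threading.Lock()        # one exclusive lock protecting the atom

def get_lock():
    """Acquire the exclusive lock; blocks until the atom may run."""
    _lock.acquire()

def release_current_lock():
    """Release the lock so that other instances may enter the atom."""
    _lock.release()

counter = 0

def atom():
    """An all-or-nothing piece of the program: runs to completion under the lock."""
    global counter
    get_lock()
    counter += 1                # atom code: no third party can interleave here
    release_current_lock()

threads = [threading.Thread(target=atom) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                  # 100: every atom ran to completion
```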
Serialized access to these atoms is assured by encapsulating each one with an exclusive lock. To create a locking policy, one must find the most efficient way of implementing resource control. If we lock objects which are too primitive (fine-grained), we risk starting programs which will only run partially, unable to complete because of busy resources. This would simply constitute a waste of CPU time. On the other hand, if we lock objects which are too coarse, logically independent parts of the program will not be started at all. This is unnecessary and inefficient. In a concurrent environment there is no reason why independent atoms could not run in separate threads, allowing several instantiations of a batch program to `flow through' one another. This assumes, however, that the order of the atoms is not important.
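The granularity trade-off can be sketched as follows: two logically independent atoms, each protected by its own lock, can flow through one another in separate threads, whereas a single coarse lock around both would serialize them unnecessarily (a hedged Python sketch; the resource names are invented for illustration):

```python
import threading

lock_a = threading.Lock()       # one lock per independent resource
lock_b = threading.Lock()
resource_a, resource_b = [], []

def update_a():
    with lock_a:                # contends only with other users of resource_a
        resource_a.append(1)

def update_b():
    with lock_b:                # independent of lock_a: may overlap with update_a
        resource_b.append(1)

# 50 instantiations of each atom, interleaved across threads
threads = [threading.Thread(target=f)
           for f in (update_a, update_b) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(resource_a), len(resource_b))   # 50 50
```

A single coarse lock around both updates would still be correct, but it would forbid the two independent atoms from ever overlapping; locking each primitive resource separately, as here, is safe only because the two atoms really are independent.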
By defining suitable atoms to lock, one can optimize the execution of the tasks in a program. Several approaches to locking may be considered. A lock-manager daemon is one possibility. This is analogous to many network license daemons: a daemon hands out tickets which are valid for a certain lifetime. After the ticket expires, the program is considered overdue and should be killed. A major problem with a daemon-based locking mechanism is that it is highly time-consuming, and that it is susceptible to precisely the same problems as those which cause the uncertainty in program runtimes. Use of flock() is another possibility, but this is not completely portable. A realistic approach needs to be more compact and efficient.
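To make the flock() option concrete, here is a sketch of a non-blocking file lock combined with a timestamp recording when the lock was taken, so that an overdue holder can at least be detected; the lock-file path and lifetime are arbitrary choices for this illustration, and fcntl.flock is Unix-specific:

```python
import fcntl
import os
import time

LOCK_FILE = "/tmp/program.lock"   # arbitrary path for this sketch
LIFETIME = 60.0                   # seconds before a holder is considered overdue

def try_get_lock():
    """Return an open lock-file object on success, or None if another
    instance already holds the lock and is not yet overdue."""
    fd = open(LOCK_FILE, "a+")
    try:
        # Non-blocking attempt: fail immediately rather than wait
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        # Lock is held: check whether the holder has exceeded its lifetime
        age = time.time() - os.fstat(fd.fileno()).st_mtime
        fd.close()
        if age > LIFETIME:
            print("lock holder is overdue and should be killed")
        return None
    os.utime(LOCK_FILE)           # timestamp the moment the lock was taken
    return fd

def release_lock(fd):
    fcntl.flock(fd, fcntl.LOCK_UN)
    fd.close()

fd = try_get_lock()
assert fd is not None             # first acquirer succeeds
assert try_get_lock() is None     # second attempt fails while the lock is held
release_lock(fd)
```

The kernel releases a flock() lock automatically when the holding process dies, which removes one class of stale-lock problems, but the call is not uniformly available or consistent across platforms, which is the portability concern noted above.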